Your AI just made something up. Would you have caught it?
ChatGPT, Gemini, and Claude sound confident — even when they’re wrong. Lenz verifies AI-generated claims against real evidence in seconds.
Why AI needs verification
AI language models like ChatGPT, Claude, and Gemini are incredible tools — but they’re not perfect. They can:
- Hallucinate facts — Confidently state things that aren’t true
- Cite fake sources — Reference studies or articles that don’t exist
- Mix up dates and numbers — Get statistics wrong while sounding authoritative
- Repeat outdated information — Their training data has cutoff dates
The problem? AI sounds confident even when it’s wrong. And most people don’t have time to verify every claim manually.
How Lenz verifies AI output
1. Extract the claim — Paste AI output or type the specific claim
2. Evidence search — Lenz searches trusted sources (scientific journals, government data, verified references)
3. Confidence scoring — Get a rated verdict (True, Mostly True, Misleading, or False)
4. See the sources — Review the actual evidence used in the analysis
Think of it as AI verifying AI — with transparent sourcing.
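The scoring step above can be sketched in a few lines. This is a hypothetical illustration, not the real Lenz pipeline: the `Evidence` type, `verdict` function, and score thresholds are all assumptions made for the example.

```python
# Hypothetical sketch of an evidence-scoring step: tally how much of
# the gathered evidence supports the claim, then map a 0-10 score to
# a verdict label. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str      # e.g. a journal article or government dataset
    supports: bool   # does this piece of evidence support the claim?

def verdict(evidence: list[Evidence]) -> tuple[str, int]:
    """Score = share of supporting evidence, scaled to 0-10."""
    if not evidence:
        return "Unverified", 0
    score = round(10 * sum(e.supports for e in evidence) / len(evidence))
    if score >= 9:
        label = "True"
    elif score >= 7:
        label = "Mostly True"
    elif score >= 4:
        label = "Misleading"
    else:
        label = "False"
    return label, score

# Mixed evidence (a kernel of truth, mostly contradicted) lands mid-range:
evidence = [
    Evidence("2012 preprint exists", True),
    Evidence("Peer review still disputed", False),
    Evidence("Survey: community has not accepted the proof", False),
    Evidence("Claimed proof was published in a journal", True),
    Evidence("Independent verification attempts failed", False),
]
print(verdict(evidence))  # ('Misleading', 4)
```

A real verifier would weight sources by reliability rather than counting them equally; the point is only that the verdict is derived from retrieved evidence, not from the model's own confidence.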
When to verify AI
Research & writing
Verifying claims before you cite them in articles, reports, or presentations.
For publishers →

Learning & education
Double-checking AI tutors’ explanations before you trust them for homework or exams.
For students →

Professional work
Validating AI-generated statistics before you present them to clients or stakeholders.
Verify before sharing →

Real example: AI vs. reality
“The ABC Conjecture was proven by Shinichi Mochizuki in 2012 and widely accepted by the mathematical community.”
Misleading (4/10)
Mochizuki published a claimed proof in 2012, but it remains unverified and controversial. The mathematical community has NOT widely accepted it as of 2026.
Browse verified claims in the Lenz library.
Tips for spotting AI hallucinations
- Check specific claims separately — AI often buries one wrong fact inside a paragraph of correct ones. Isolate and verify each claim.
- Be skeptical of citations — AI frequently invents author names, journal titles, and DOIs. Always click through to the original source.
- Watch for confident language — Phrases like “studies show” or “experts agree” don’t mean the AI actually found those studies or experts.
- Cross-check numbers — Statistics, dates, and percentages are where AI models fail most often. Verify any number that matters.
- Use Lenz as your second opinion — Paste the claim, get an evidence-backed verdict, and see the real sources.
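The first tip — isolating claims so one wrong fact can't hide inside otherwise-correct text — can be sketched with a naive sentence splitter. This is an illustrative helper, not part of Lenz; real claim extraction would use NLP, but the idea is the same.

```python
# Hypothetical helper: break a paragraph into sentence-level claims
# so each one can be verified on its own.
import re

def split_claims(text: str) -> list[str]:
    """Naively split on sentence-ending punctuation followed by a space."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

paragraph = ("The Eiffel Tower opened in 1889. It is 330 metres tall. "
             "It was the world's tallest structure until 1930.")
for claim in split_claims(paragraph):
    print(claim)  # check each sentence separately, not the paragraph as a whole
```

Even if two of the three sentences check out, the third still needs its own verdict; verifying the paragraph as a single unit would let the weakest claim ride on the strongest.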
Frequently asked questions
Can Lenz verify any AI tool?
How is Lenz different from asking AI to verify itself?
What if the claim is too specific or niche?
Is Lenz free?
What is an AI hallucination?
Stop trusting AI blindly. Verify first.
Your AI said something confident. But is it actually true? Paste the claim and get a sourced, scored answer in seconds.
Start verifying AI claims